Results 1 - 8 of 8

1.
Chinese Journal of Digestive Endoscopy ; (12): 534-538, 2023.
Article in Chinese | WPRIM | ID: wpr-995410

ABSTRACT

Objective: To evaluate deep learning for differentiating the invasion depth of colorectal adenomas under image-enhanced endoscopy (IEE). Methods: A total of 13 246 IEE images from 3 714 lesions acquired from November 2016 to June 2021 were retrospectively collected at Renmin Hospital of Wuhan University, Shenzhen Hospital of Southern Medical University, and the First Hospital of Yichang to construct a deep learning model differentiating colorectal adenoma lesions with deep submucosal invasion from those without deep submucosal invasion. The performance of the deep learning model was validated on an independent test set and an external test set. The full image test set was used to compare the diagnostic performance of 5 endoscopists with that of the deep learning model. A total of 35 videos were collected from January to June 2021 at Renmin Hospital of Wuhan University to validate the diagnostic performance of the endoscopists with the assistance of the deep learning model. Results: The accuracy and Youden index of the deep learning model on the image test set were 93.08% (821/882) and 0.86, which were better than those of the endoscopists [the highest were 91.72% (809/882) and 0.78]. On the video test set, the accuracy and Youden index of the model were 97.14% (34/35) and 0.94. With the assistance of the model, the accuracy of the endoscopists improved significantly [the highest was 97.14% (34/35)]. Conclusion: The deep learning model obtained in this study could accurately identify deep submucosal invasion in colorectal adenomas and could improve the diagnostic accuracy of endoscopists.
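
As a rough illustration of how the accuracy and Youden index reported above can be computed from per-image predictions of such a binary classifier, the following Python sketch uses scikit-learn; the labels and predictions are hypothetical and not taken from the study.

# Hypothetical sketch: accuracy and Youden index for a binary
# "deep submucosal invasion" (1) vs. "non-deep invasion" (0) classifier.
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [1, 0, 0, 1, 0, 1, 0, 0]   # ground-truth labels (hypothetical)
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]   # thresholded model outputs (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)              # true positive rate
specificity = tn / (tn + fp)              # true negative rate
youden_index = sensitivity + specificity - 1
print(accuracy_score(y_true, y_pred), round(youden_index, 2))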

2.
Chinese Journal of Digestive Endoscopy ; (12): 295-300, 2022.
Article in Chinese | WPRIM | ID: wpr-934107

ABSTRACT

Objective: To construct a deep learning-based artificial intelligence endoscopic ultrasound (EUS) bile duct scanning substation system to assist endoscopists in learning multi-station imaging and improving their operating skills. Methods: A total of 522 EUS videos from Renmin Hospital of Wuhan University and Wuhan Union Hospital acquired from May 2016 to October 2020 were collected, and images were captured from these videos, including 3 000 white-light images and 31 003 EUS images from Renmin Hospital of Wuhan University, and 799 EUS images from Wuhan Union Hospital. The images were divided into a training set and a test set for the EUS bile duct scanning system. The system included a model for filtering out white-light gastroscopy images (model 1), a model for distinguishing standard station images from non-standard station images (model 2), and a substation model for standard EUS bile duct scanning images (model 3), which classified the standard images into the liver window, stomach window, duodenal bulb window, and duodenal descending window. Then 110 images were randomly selected from the test set for a man-machine competition to compare the accuracy of multi-station imaging among experts, senior endoscopists, and the artificial intelligence model. Results: The accuracies of model 1 and model 2 were 100.00% (1 200/1 200) and 93.36% (2 938/3 147), respectively. Those of model 3 on the internal validation dataset were 97.23% (1 687/1 735) for the liver window, 96.89% (1 681/1 735) for the stomach window, 98.73% (1 713/1 735) for the duodenal bulb window, and 97.18% (1 686/1 735) for the duodenal descending window, and those on the external validation dataset were 89.61% (716/799) for the liver window, 92.74% (741/799) for the stomach window, 90.11% (720/799) for the duodenal bulb window, and 92.24% (737/799) for the duodenal descending window. In the man-machine competition, the accuracy of the substation model was 89.09% (98/110), which was higher than that of the senior endoscopists [85.45% (94/110), 74.55% (82/110), and 85.45% (94/110)] and close to the level of the experts [92.73% (102/110) and 90.00% (99/110)]. Conclusion: The deep learning-based EUS bile duct scanning system constructed in this study can assist endoscopists in performing standard multi-station scanning more accurately in real time and improve the completeness and quality of EUS.
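
The abstract describes a three-stage cascade (filtering, standard/non-standard discrimination, and substation classification). A minimal Python sketch of how such a cascade might be chained at inference time is given below; the predict_* callables stand in for the trained networks and are assumptions, not the study's actual interface.

# Hypothetical cascade: model 1 drops white-light gastroscopy frames,
# model 2 keeps only standard-station frames, model 3 assigns the window.
STATIONS = ["liver window", "stomach window",
            "duodenal bulb window", "duodenal descending window"]

def classify_frame(frame, predict_is_eus, predict_is_standard, predict_station):
    """Route one frame through the three-model cascade (illustrative only)."""
    if not predict_is_eus(frame):            # model 1: filter white-light images
        return "filtered: white-light image"
    if not predict_is_standard(frame):       # model 2: filter non-standard views
        return "filtered: non-standard station image"
    return STATIONS[predict_station(frame)]  # model 3: one of the four windows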

3.
Chinese Journal of Digestive Endoscopy ; (12): 133-138, 2022.
Article in Chinese | WPRIM | ID: wpr-934086

ABSTRACT

Objective: To evaluate the intelligent gastrointestinal endoscopy quality control system in gastroscopy. Methods: Fourteen endoscopists from Renmin Hospital of Wuhan University were assigned to a quality-control group and a control group by a random number table. In the pre-quality-control stage (from April 20, 2019 to May 31, 2019), data on gastroscopies performed by the enrolled endoscopists were collected. In the training stage (June 1 to 30, 2019), the quality-control group was trained in quality control knowledge and in the use of the intelligent gastrointestinal endoscopy quality control system, while the control group was trained in quality control knowledge only. In the post-quality-control stage (from July 1, 2019 to August 20, 2019), a weekly quality report with review and feedback was submitted to the endoscopists in the quality-control group, while the control group received no quality control report. Gastroscopies performed by the enrolled endoscopists during this stage were also collected. Changes in the precancerous lesion detection rate were compared between the two groups. Results: Seven endoscopists were assigned to each group. A total of 3 446 gastroscopies were included in the pre-quality-control stage (n=1 651, including 753 cases in the quality-control group and 898 cases in the control group) and the post-quality-control stage (n=1 795, including 892 cases in the quality-control group and 903 cases in the control group). The detection rate of precancerous lesions in the quality-control group increased by 3.6% [3.3% (29/892) VS 6.9% (52/753), χ2=11.65, P<0.01], while that of the control group increased by 0.4% [3.3% (30/903) VS 3.7% (33/898), χ2=0.17, P=0.684]. Conclusion: The intelligent gastrointestinal endoscopy quality control system with review and feedback could monitor and improve the quality of gastroscopy.
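
The detection-rate comparisons above rely on the chi-square test for 2x2 contingency tables. A brief Python sketch of such a test with SciPy is shown below; the counts are hypothetical and are not the study's data.

# Hypothetical chi-square test of a change in precancerous-lesion detection
# rate between two stages (rows), detected vs. not detected (columns).
from scipy.stats import chi2_contingency

table = [[30, 870],   # pre-quality-control: detected, not detected
         [60, 840]]   # post-quality-control: detected, not detected
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")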

4.
Chinese Journal of Digestion ; (12): 464-469, 2022.
Article in Chinese | WPRIM | ID: wpr-958335

ABSTRACT

Objective: To construct a deep learning-based diagnostic system for gastrointestinal submucosal tumors (SMT) under endoscopic ultrasonography (EUS) to help endoscopists diagnose SMT. Methods: From January 1, 2019 to December 15, 2021, 245 patients with SMT confirmed by pathological diagnosis who underwent EUS and endoscopic submucosal dissection at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were enrolled, and a total of 3 400 EUS images were collected. Of these, 2 722 EUS images were used to train the lesion segmentation model, and 2 209 EUS images were used to train the stromal tumor versus leiomyoma classification model; 283 and 191 images were selected as independent test sets to evaluate the segmentation model and the classification model, respectively. Thirty images were selected as an independent dataset for a human-machine competition comparing the lesion classification accuracy of the classification model with that of 6 endoscopists. The performance of the segmentation model was evaluated with indices such as Intersection-over-Union and the Dice coefficient, and the performance of the classification model was evaluated by accuracy. The chi-square test was used for statistical analysis. Results: The mean Intersection-over-Union and Dice coefficient of the lesion segmentation model were 0.754 and 0.835, respectively, and its precision, recall, and F1 score were 95.2%, 98.9%, and 97.0%, respectively. Based on the lesion segmentation, the accuracy of the classification model increased from 70.2% to 92.1%. In the human-machine competition, the accuracy of the classification model in differentiating stromal tumors from leiomyomas was 86.7% (26/30), which was superior to that of 4 of the 6 endoscopists (56.7%, 17/30; 56.7%, 17/30; 53.3%, 16/30; 60.0%, 18/30; respectively), and the differences were statistically significant (χ2=7.11, 7.36, 8.10, 6.13; all P<0.05). There was no significant difference between the accuracy of the other 2 endoscopists (76.7%, 23/30; 73.3%, 22/30; respectively) and that of the model (both P>0.05). Conclusion: This system could be used for the auxiliary diagnosis of SMT under EUS in the future and could provide strong evidence for subsequent treatment decisions.
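
For reference, the segmentation metrics reported above (Intersection-over-Union and the Dice coefficient) can be computed from binary masks as in the short NumPy sketch below; the masks are synthetic and purely illustrative.

# Hypothetical IoU and Dice computation for a predicted vs. ground-truth
# lesion mask (boolean arrays of the same shape).
import numpy as np

pred = np.zeros((64, 64), dtype=bool)
true = np.zeros((64, 64), dtype=bool)
pred[10:40, 10:40] = True               # synthetic predicted lesion region
true[15:45, 15:45] = True               # synthetic ground-truth lesion region

intersection = np.logical_and(pred, true).sum()
union = np.logical_or(pred, true).sum()
iou = intersection / union
dice = 2 * intersection / (pred.sum() + true.sum())
print(round(float(iou), 3), round(float(dice), 3))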

5.
Chinese Journal of Digestive Endoscopy ; (12): 795-800, 2021.
Article in Chinese | WPRIM | ID: wpr-912175

ABSTRACT

Objective: To evaluate the intelligent performance measurement system for colonoscopy. Methods: Nine endoscopists from Renmin Hospital of Wuhan University, enrolled according to the inclusion and exclusion criteria, were randomly assigned to a quality control group and a control group by a random number table. In the pre-quality-control stage (from April 20, 2019 to May 30, 2019), colonoscopy data acquired by the enrolled endoscopists were collected. In the training stage (June 1-30, 2019), the quality control group was trained in quality control knowledge and in the use of the intelligent gastrointestinal endoscopy performance measurement system, while the control group was trained in quality control knowledge only. In the post-quality-control stage (from July 1, 2019 to August 20, 2019), weekly quality feedback was given to the endoscopists of the quality control group, while the endoscopists of the control group received no quality control report. Colonoscopy data acquired by the enrolled endoscopists during this stage were prospectively collected. The primary endpoint was the adenoma detection rate; the secondary endpoints were withdrawal time, polyp detection rate, and cecal intubation rate. Results: Four endoscopists were assigned to the quality control group and five to the control group, and a total of 1 471 colonoscopies were analyzed. The adenoma and polyp detection rates in the quality control group increased with feedback [13.7% (45/329) VS 7.1% (24/338), χ2=55.796, P<0.001; 48.9% (161/329) VS 40.2% (136/338), χ2=4.071, P=0.044], while there were no significant differences in the control group [9.3% (37/398) VS 9.1% (37/406), χ2=0.329, P=0.566; 33.9% (135/398) VS 33.0% (134/406), χ2=3.616, P=0.057]. The withdrawal time in the quality control group increased with feedback [5.9 (3.9, 7.3) min VS 4.1 (2.8, 6.1) min, Z=6.965, P<0.001], while there was no significant difference in the control group [3.9 (2.7, 6.1) min VS 3.6 (2.6, 5.8) min, Z=1.355, P=0.175]. Conclusion: The intelligent performance measurement system for gastrointestinal endoscopy with feedback can monitor and improve colonoscopy quality.
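
The withdrawal-time comparison above is reported with a Z statistic on medians and interquartile ranges, which is consistent with a rank-based test such as the Mann-Whitney U test; the exact procedure is not stated in the abstract. A small Python sketch with hypothetical withdrawal times is given below.

# Hypothetical rank-based comparison of withdrawal times (minutes)
# before and after feedback; values are illustrative, not the study's.
from scipy.stats import mannwhitneyu

pre_feedback  = [4.1, 2.8, 6.1, 3.5, 5.0, 3.9]
post_feedback = [5.9, 3.9, 7.3, 6.2, 5.5, 6.8]
stat, p = mannwhitneyu(post_feedback, pre_feedback, alternative="two-sided")
print(f"U={stat}, p={p:.3f}")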

6.
Chinese Journal of Digestive Endoscopy ; (12): 778-782, 2021.
Article in Chinese | WPRIM | ID: wpr-912172

ABSTRACT

Objective: To develop an endoscopic ultrasonography (EUS) station recognition and pancreatic segmentation system based on deep learning and to validate its efficacy. Methods: Data of 269 EUS procedures performed at Renmin Hospital of Wuhan University between December 2016 and December 2019 were retrospectively collected and divided into 3 datasets: (1) dataset A of 205 procedures for model training, containing 16 305 images for classification training and 1 953 images for segmentation training; (2) dataset B of 44 procedures for model testing, containing 1 606 images for classification testing and 480 images for segmentation testing; (3) dataset C of 20 procedures with 150 images for comparing the performance of the model with that of endoscopists. EUS experts A and B (each with more than 10 years of experience) classified and labeled all images of datasets A, B, and C through discussion, and the results served as the gold standard. EUS expert C and senior EUS endoscopists D and E (each with more than 5 years of experience) classified and labeled the images in dataset C, and these results were used for comparison with the model. The main outcomes were classification accuracy, segmentation Dice (F1 score), and the Cohen kappa coefficient for consistency analysis. Results: On test dataset B, the model achieved a mean classification accuracy of 94.1%, and the mean Dice coefficients for pancreatic and vascular segmentation were 0.826 and 0.841, respectively. On dataset C, the classification accuracy of the model reached 90.0%, while the classification accuracies of expert C and senior endoscopists D and E were 89.3%, 88.7%, and 87.3%, respectively. The Dice coefficients for pancreatic and vascular segmentation were 0.740 and 0.859 for the model, 0.708 and 0.778 for expert C, 0.747 and 0.875 for senior endoscopist D, and 0.774 and 0.789 for senior endoscopist E; the model was comparable to the expert level. Consistency analysis showed high consistency between the model and the endoscopists (the kappa coefficient was 0.823 between the model and expert C, 0.840 between the model and senior endoscopist D, and 0.799 between the model and senior endoscopist E). Conclusion: The deep learning-based EUS station classification and pancreatic segmentation system can be used for quality control of pancreatic EUS, with classification and segmentation performance comparable to that of EUS experts.
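
The consistency analysis above uses the Cohen kappa coefficient. A minimal Python sketch of computing kappa between the model's station labels and an endoscopist's labels is shown below; the label names and sequences are hypothetical, since the abstract does not enumerate the stations.

# Hypothetical agreement check between model and endoscopist station labels.
from sklearn.metrics import cohen_kappa_score

model_labels       = ["station1", "station2", "station2", "station3", "station1"]
endoscopist_labels = ["station1", "station2", "station3", "station3", "station1"]
print(round(cohen_kappa_score(model_labels, endoscopist_labels), 3))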

7.
Chinese Journal of Digestive Endoscopy ; (12): 107-114, 2021.
Article in Chinese | WPRIM | ID: wpr-885700

ABSTRACT

Objective: To construct an intelligent performance measurement system for gastrointestinal endoscopy and to analyze its value for endoscopic quality improvement. Methods: The intelligent gastrointestinal endoscopy performance measurement system was developed using deep convolutional neural networks (DCNN) and deep reinforcement learning, based on the Digital Imaging and Communications in Medicine (DICOM) standard. Images were acquired from patients undergoing gastrointestinal endoscopy at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from December 2016 to October 2018. The system applied a cecum recognition model (DCNN1), an in vitro and in vivo image recognition model (DCNN2), and a 26-site gastric identification model (DCNN3) to monitor indices such as the cecal intubation rate, colonoscopic withdrawal time, gastroscopic inspection time, and gastroscopic coverage. Images of 83 gastroscopies and 205 colonoscopies acquired at the same center from March to November 2019 were randomly selected to examine the effectiveness of the system. Results: The intelligent gastrointestinal endoscopy performance measurement system provided quality analysis of both gastroscopy and colonoscopy, covering all the above indices, and reports could be generated automatically at any time. The accuracies for the cecal intubation rate, colonoscopic withdrawal time, gastroscopic inspection time, and gastroscopic coverage were 92.5% (172/186), 91.7% (188/205), 100.0% (83/83), and 89.3% (1 928/2 158), respectively. Conclusion: The intelligent performance measurement system for gastrointestinal endoscopy can be recommended for quality control of gastrointestinal endoscopy; endoscopists can obtain feedback from it and improve the quality of their examinations.
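
As an illustration of how per-frame model outputs could be aggregated into the indices named above (gastroscopic coverage of the 26 sites and cecal intubation), a short Python sketch follows; the data structures and function names are assumptions, not the system's actual interface.

# Hypothetical aggregation of per-frame predictions into quality indices.
def gastric_coverage(frame_site_labels, n_sites=26):
    """Fraction of the 26 standard gastric sites observed at least once."""
    observed = {label for label in frame_site_labels if label is not None}
    return len(observed) / n_sites

def cecum_reached(frame_is_cecum_flags):
    """Cecal intubation is counted if any frame is recognized as the cecum."""
    return any(frame_is_cecum_flags)

print(gastric_coverage([0, 3, 3, 7, None, 12]), cecum_reached([False, True]))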

8.
Chinese Journal of Digestive Endoscopy ; (12): 648-651, 2008.
Article in Chinese | WPRIM | ID: wpr-381431

ABSTRACT

Objective: To evaluate the expression of substance P (SP) and calcitonin gene-related peptide (CGRP) in the esophageal mucosa of patients with non-erosive gastroesophageal reflux disease (NERD) and reflux esophagitis (RE), and to explore their role in the development of NERD. Methods: Fifty-one patients with typical symptoms of gastroesophageal reflux disease (GERD) were evaluated with the reflux disease questionnaire (RDQ), a PPI test, endoscopy, and 24-hour esophageal pH monitoring. The patients were then divided into an RE group (n=21), a NERD group with acid reflux (NERD+, n=12), and a NERD group without acid reflux (NERD-, n=18) according to the evaluation results. The expression of SP and CGRP in esophageal mucosa from these patients and 10 healthy control subjects was assayed by immunohistochemistry, and the staining positive index (PI) was calculated with color patho-image analysis software and compared. Results: The PIs of SP and CGRP in the NERD- group were 96.77±31.74 and 24.76±29.15, respectively, which were significantly higher than those of the NERD+ group (73.64±31.38 and 9.78±10.30, respectively, P<0.05), the RE group (67.56±34.62 and 9.61±6.20, respectively, P<0.05), and the control group (59.82±46.15 and 8.64±12.12, respectively, P<0.05). Conclusion: The expression of SP and CGRP in esophageal mucosa from NERD patients without detectable acid reflux is significantly increased; these peptides may play an important role in esophageal visceral sensitivity.
